-
This paper shows that dropout training in generalized linear models is the minimax solution of a two-player, zero-sum game in which an adversarial nature corrupts a statistician's covariates using a multiplicative nonparametric errors-in-variables model. In this game, nature's least favorable distribution is dropout noise, where nature independently deletes entries of the covariate vector with some fixed probability δ. This result implies that dropout training indeed provides out-of-sample expected loss guarantees for distributions that arise from multiplicative perturbations of in-sample data. The paper makes a concrete recommendation on how to select the tuning parameter δ. The paper also provides a novel, parallelizable, unbiased multi-level Monte Carlo algorithm to speed up the implementation of dropout training. Our algorithm has a much smaller computational cost than the naive implementation of dropout, provided the number of data points is much smaller than the dimension of the covariate vector.
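As an illustration of the training scheme the paper studies, here is a minimal Python sketch of dropout training for a logistic regression (one member of the generalized linear model family). The function name, the plain SGD update, and the omission of the paper's multi-level Monte Carlo acceleration are simplifications for exposition, not the authors' algorithm; the only ingredient taken from the abstract is the multiplicative noise that deletes each covariate entry independently with probability δ.

```python
import numpy as np

def dropout_logistic_sgd(X, y, delta=0.2, lr=0.1, epochs=50, seed=0):
    """Illustrative dropout training for logistic regression via SGD.

    Each gradient step corrupts the covariates with independent
    multiplicative Bernoulli noise: every entry is deleted (set to
    zero) with probability delta, matching the least favorable
    distribution described in the abstract.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(epochs):
        for i in rng.permutation(n):
            mask = rng.random(p) >= delta        # keep each entry w.p. 1 - delta
            x_tilde = X[i] * mask                # multiplicatively corrupted covariates
            mu = 1.0 / (1.0 + np.exp(-x_tilde @ beta))
            beta -= lr * (mu - y[i]) * x_tilde   # logistic log-loss gradient step
    return beta
```

Some formulations rescale the retained entries by 1/(1 − δ) so the corrupted covariates are unbiased for the originals; the abstract does not pin down a convention, so the sketch omits that rescaling.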
-
Two recent strands of the structural vector autoregression literature use higher moments for identification, exploiting either non-Gaussianity or heteroskedasticity. These approaches achieve point identification without exclusion or sign restrictions. We review this work critically and contrast its goals with the separate research program that has pushed for macroeconometrics to rely more heavily on credible economic restrictions. Identification from higher moments imposes stronger assumptions on the shock process than second-order methods do. We recommend that these assumptions be tested. Since inference from higher moments places high demands on a finite sample, weak identification issues should be given priority by applied users.
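To make the "identification from higher moments" idea concrete, the sketch below follows the non-Gaussianity route in its simplest form: fit a reduced-form VAR, then unmix the residuals with independent component analysis. The use of statsmodels and scikit-learn, the lag length, and the function name are illustrative choices, not the procedure of any specific paper under review, and the sketch bakes in exactly the strong assumption the text flags: mutually independent, non-Gaussian structural shocks.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.decomposition import FastICA

def svar_via_nongaussianity(data, lags=4, seed=0):
    """Illustrative point identification of structural shocks.

    Fit a reduced-form VAR, then unmix the residuals with ICA.
    Valid only under the higher-moment assumption the review warns
    about: mutually independent, non-Gaussian structural shocks.
    """
    res = VAR(data).fit(lags)
    u = res.resid                      # reduced-form residuals
    ica = FastICA(whiten="unit-variance", random_state=seed)
    shocks = ica.fit_transform(u)      # candidate structural shocks
    mixing = ica.mixing_               # impact effects, up to sign and ordering
    return shocks, mixing
```

Consistent with the review's recommendation, the maintained assumptions should be tested, for instance by checking the recovered shocks for non-Gaussianity; note also that ICA identifies the shocks only up to sign and ordering.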
-
Different agents need to make a prediction. They observe identical data, but have different models: they predict using different explanatory variables. We study which agent believes they have the best predictive ability—as measured by the smallest subjective posterior mean squared prediction error—and show how it depends on the sample size. With small samples, we present results suggesting it is an agent using a low-dimensional model. With large samples, it is generally an agent with a high-dimensional model, possibly including irrelevant variables, but never excluding relevant ones. We apply our results to characterize the winning model in an auction of productive assets, to argue that entrepreneurs and investors with simple models will be overrepresented in new sectors, and to understand the proliferation of “factors” that explain the cross-sectional variation of expected stock returns in the asset-pricing literature.
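The Python sketch below, entirely a constructed example, mimics the comparison in the abstract under stock assumptions: Gaussian covariates, ridge-style Gaussian posteriors, and a plug-in noise estimate. Each agent believes its own model is correct, so its subjective posterior mean squared prediction error is its estimated noise variance plus its average posterior parameter uncertainty; the dimensions, coefficients, and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

def subjective_mspe(X, y, cols, lam=1.0):
    """One agent's believed mean squared prediction error.

    The agent regresses y on X[:, cols] under a Gaussian prior,
    believes its own model, and expects a squared error equal to
    (its noise estimate) + (average posterior parameter uncertainty).
    """
    Z = X[:, list(cols)]
    n, k = Z.shape
    A = Z.T @ Z + lam * np.eye(k)
    beta = np.linalg.solve(A, Z.T @ y)              # posterior mean
    resid = y - Z @ beta
    sigma2 = resid @ resid / max(n - k, 1)          # believed noise variance
    Sigma = sigma2 * np.linalg.inv(A)               # posterior covariance
    return sigma2 + np.einsum("ij,jk,ik->i", Z, Sigma, Z).mean()

rng = np.random.default_rng(0)
p = 20
beta_true = np.r_[np.ones(3), 0.1 * np.ones(2), np.zeros(p - 5)]
for n in (30, 5000):                                # small vs. large sample
    X = rng.normal(size=(n, p))
    y = X @ beta_true + rng.normal(size=n)
    print(n,
          subjective_mspe(X, y, range(3)),          # low-dimensional agent
          subjective_mspe(X, y, range(p)))          # high-dimensional agent
```

In simulations of this kind, small samples tend to favor the low-dimensional agent's self-assessment (its parameter uncertainty is tiny), while large samples favor the high-dimensional agent, whose richer model drives the believed noise variance down; individual draws vary with the seed.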
-
Applied macroeconomists often compute confidence intervals for impulse responses using local projections, that is, direct linear regressions of future outcomes on current covariates. This paper proves that local projection inference robustly handles two issues that commonly arise in applications: highly persistent data and the estimation of impulse responses at long horizons. We consider local projections that control for lags of the variables in the regression. We show that lag‐augmented local projections with normal critical values are asymptotically valid uniformly over (i) both stationary and non‐stationary data, and also over (ii) a wide range of response horizons. Moreover, lag augmentation obviates the need to correct standard errors for serial correlation in the regression residuals. Hence, local projection inference is arguably both simpler than previously thought and more robust than standard autoregressive inference, whose validity is known to depend sensitively on the persistence of the data and on the length of the horizon.
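A minimal version of the regression described above is easy to write down. The Python sketch below, with a stylized bivariate setup and variable names chosen for illustration, regresses y at horizon h on an impulse variable x at time t while controlling for p lags of both series, then pairs the coefficient with plain heteroskedasticity-robust (non-HAC) standard errors and normal critical values, which is the point of the paper.

```python
import numpy as np

def lag_augmented_lp(y, x, h, p=4):
    """Illustrative lag-augmented local projection at horizon h.

    Regress y_{t+h} on x_t, controlling for p lags of both y and x.
    Per the abstract, the usual heteroskedasticity-robust standard
    error on x_t (no HAC correction) with normal critical values
    yields an asymptotically valid confidence interval.
    """
    T = len(y)
    rows = list(range(p, T - h))                   # usable time indices
    Y = np.array([y[t + h] for t in rows])         # future outcome
    cols = [np.ones(len(rows)), np.array([x[t] for t in rows])]
    for j in range(1, p + 1):                      # lag controls
        cols.append(np.array([y[t - j] for t in rows]))
        cols.append(np.array([x[t - j] for t in rows]))
    Z = np.column_stack(cols)
    b, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    u = Y - Z @ b                                  # regression residuals
    bread = np.linalg.inv(Z.T @ Z)
    V = bread @ (Z.T * u**2) @ Z @ bread           # Eicker-Huber-White covariance
    irf, se = b[1], np.sqrt(V[1, 1])
    return irf, (irf - 1.96 * se, irf + 1.96 * se) # 95% normal CI
```

In practice the exact regressor set depends on the identification scheme; the key feature is the lag augmentation itself, which is what lets the researcher skip serial-correlation corrections to the standard errors.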